# Formal definition of limits.

This is a sort of rehashing of what we discussed on Thursday, as well as some additional examples worked out on how to prove $\lim_{x\to a}f(x)=L$ for basic functions (linear, polynomial, rational functions, and roots).

Jump to:

- [Some forewords for a formal definition.](#forewords)
- [The $\epsilon$-$\delta$ definition of limits.](#definition)
- [Some specific cases and examples.](#examples)
    - [Linear case.](#linear)
    - [Polynomial case.](#polynomial)
    - [Rational function case.](#rational)
    - [Root case.](#root)
- [Final remarks.](#remarks)

### Some forewords for a formal definition.

By now you should have some intuitive idea of limits, when they exist, and how to compute them in our basic cases, sometimes with the help of algebraic techniques such as factoring, long division, or algebraic conjugates. Our intuitive understanding of limits is as follows:

For a function $f(x)$, we say $\displaystyle\lim_{x\to a}f(x) =L$ if the function $f(x)$ approaches the number $L$ as $x$ approaches, but is not equal to, $a$. We also write this as $f(x)\to L$ as $x\to a$.

Graphically, this means that if we move along the graph of $y=f(x)$, as the $x$-coordinate gets closer and closer to $a$, without actually being at $a$, the height of the graph gets closer and closer to $L$.

We also have some situations where the limit $\displaystyle\lim_{x\to a}f(x)$ does not exist: (1) where there is a jump at $x=a$; (2) where there is a vertical asymptote at $x=a$ (the graph goes to infinity); (3) where near $x=a$ the function oscillates infinitely often with amplitude that does not go to zero.

These algebraic and intuitive ways of determining limits **still largely work**, and they didn't bother mathematicians (that much) before the 1800s or so, since many of the functions people considered were **piecewise elementary functions** (and we still do!). Piecewise refers to the function having finitely many pieces defined over different intervals. And an elementary function is a function that is a composition of finitely many of the following:

- Polynomials
- Rational functions
- Trigonometric functions
- Exponential and logarithm functions
- Root functions
- And their inverses

For instance, something like $\displaystyle f(x)= \frac{x^{2}+\sin(e^{x})}{\frac{x}{\sqrt{x^{3}+1}}+\cos(\ln(x))}$ is an elementary function. Although it may look complicated, the limiting behavior of such an elementary function is quite tame on its domain. That is, we have:

> If $f(x)$ is an elementary function, and $x=a$ is a point in the domain of $f$, then $\displaystyle\lim_{x\to a}f(x)=f(a)$. That is, we simply substitute $x=a$ into $f(x)$.

This is because these elementary functions are **continuous** at every point where they are defined. (We will talk about continuity later, but intuitively it means you can draw the graph through $x=a$ without lifting your pencil.) If the function is piecewise, then we just analyze it at each point where it changes behavior. And life was simple enough, and calculus went on. (Ok, despite having the name elementary functions, some of these are still very interesting in their own right, but not **wildly monstrous**.)

At the turn of the 1800s, people started considering functions that are a sum of infinitely many functions. Now "adding infinitely many things" wasn't entirely new at that time, but it really started to raise some questions and challenged mathematicians' understanding of what it really means for a function to be continuous (and differentiable, and integrable, for that matter).
People came up with curious functions. Many examples arise from **Fourier series** (which were originally motivated by the heat equation), for instance the Fourier series of a square wave:

$$
\sum_{k=1}^{\infty} \frac{1}{2k-1}\sin((2k-1)\pi x)
$$

Here we are adding infinitely many sine functions together. Already that sounds outlandish and certainly deserves a proper definition (which of course mathematicians gave later, and you will also see something similar called **infinite series** in calculus 2).

Observe that if we only add up **finitely many** of those sine functions, $\sum_{k=1}^{N} \frac{1}{2k-1}\sin((2k-1)\pi x)$, where $N$ is finite, we just have an elementary function! Below are plots for various finite values of $N$, namely $N=2,10,100$, where we see that the limit of each of these functions as $x\to 0$ is always zero:

![[smc-fall-2023-math-7/week-5/---files/Pasted image 20230930093924.png]]

But if we were to add up "infinitely" many of these sine functions, then we get something whose limit at $x=0$ does not exist! People pushed this further and created many **monsters of real analysis** (try googling this phrase), and many examples of pathological functions that challenged our understanding of limits, continuity, and differentiability. If anything, a precise definition is needed so we can at least properly talk about limits. And in the early 1800s, mathematicians such as Bolzano, Cauchy, and Weierstrass helped develop the limit definition we use today.

(By the way, just because we have a formal definition, it does not mean the task of determining whether a limit exists became any easier! Nor does it make the monsters go away. It is just that we have a set of precise language so we can describe things exactly, when possible.)

### The $\epsilon$-$\delta$ definition of limits.

We now give the formal definition:

> **Definition.**
> For a function $f(x)$, we say $\displaystyle\lim_{x\to a}f(x)=L$ if for every $\epsilon > 0$, there exists $\delta > 0$ such that whenever $0 < |x-a| <\delta$, we have $|f(x)-L| < \epsilon$.

Here is a translation: If we say $\displaystyle\lim_{x\to a}f(x)=L$, then for any positive error tolerance $\epsilon > 0$ we set around the value $L$, we can find a $\delta$-window to the left and to the right of $x=a$, excluding $x=a$ itself, such that the value of the function $f(x)$ over this $\delta$-window is always within $\epsilon$ of $L$ (that is, $|f(x)-L| < \epsilon$).

What does this mean graphically? First let us draw the graph of $f(x)$. If we indeed have $\displaystyle\lim_{x\to a} f(x) = L$, then for any $\epsilon$-tube we draw around $y=L$, we can always pick a $\delta$-window to the left and right of $x=a$, such that the graph of $f(x)$ over that window lies in the $\epsilon$-tube:

![[smc-fall-2023-math-7/week-5/---files/week-5C-problems 2023-09-28 15.57.55.excalidraw.svg]]

If we can achieve this for every $\epsilon > 0$, coming up with such a $\delta$ each time, then we indeed have $\lim_{x\to a}f(x) = L$. In principle, $\delta = \delta(\epsilon)$ is a **function** of $\epsilon$, as it is a response to whatever $\epsilon$ is chosen. So if we are to prove $\displaystyle\lim_{x\to a}f(x) = L$, we need to be able to produce a $\delta$ for any $\epsilon$ thrown at us, such that our function over the $\delta$-window lives in the $\epsilon$-tube. In general, we want a **small enough** $\delta$.

So a proof of $\displaystyle\lim_{x\to a}f(x)=L$ would look something like this:

> For $\epsilon > 0$, take $\delta =$ ........... ,
> such that when $0 < |x-a|<\delta$,
> which implies ....
> ....
> ....
> ....
> ....
> we get $|f(x)-L| < \epsilon$. $\blacksquare$

(where we would need to figure out what $\delta=\delta(\epsilon)$ ought to be, and do the relevant algebraic work to deduce the final conclusion $|f(x)-L| < \epsilon$. Figuring these out often requires some scrap work.)

Let us discuss again the first example we gave in class.

**Example.** Prove for $f(x)= 2x + 1$ that $\displaystyle\lim_{x\to 2} (2x+1)= 5$.

To prove this, we need to show that for any $\epsilon > 0$ thrown at us, we can produce a $\delta > 0$ such that the function $f(x)$ over the $\delta$-window around $x=2$, which is $(2-\delta,2)\cup(2,2+\delta)$, lies inside the $\epsilon$-tube around $5$.

Say $\epsilon = 1$; what $\delta$ should we pick? By making a diagram we can graphically solve for this $\delta$:

![[smc-fall-2023-math-7/week-5/---files/formal-definition-of-limits 2023-09-30 10.57.56.excalidraw.svg]]

Here we see that taking $\delta = 0.5$ works for this $\epsilon = 1$. Great! By repeating this process, we see that we can make the following choices:

$$
\begin{array}{c|c}
\epsilon & \delta \\ \hline
1 & 0.5 \\
0.5 & 0.25 \\
0.25 & 0.125 \\
\vdots & \vdots
\end{array}
$$

This is all good, but we need to have a $\delta$ ready for any positive $\epsilon$, so we need something more general. Indeed, by making a diagram for an arbitrary $\epsilon > 0$, we can solve for the endpoints of the graph of $f(x)=2x+1$ inside the $\epsilon$-tube to be $\displaystyle 2-\frac{\epsilon}{2}$ and $\displaystyle 2+ \frac{\epsilon}{2}$, which shows $\displaystyle \delta = \frac{\epsilon}{2}$ works.

![[smc-fall-2023-math-7/week-5/---files/formal-definition-of-limits 2023-09-30 10.54.14.excalidraw.svg]]

So a formal proof would be as follows:

Proof of $\displaystyle\lim_{x\to 2} (2x+1)=5$.

$\blacktriangleright$ Let $\epsilon > 0$, take $\delta = \displaystyle\frac{\epsilon}{2}$,
such that when $0 < |x-2| < \delta$,
which implies $\displaystyle |x-2| < \frac{\epsilon}{2}$,
which implies $|2x-4| < \epsilon$,
we get $|(2x+1) - 5| < \epsilon$. $\blacksquare$

Here is another simple example.

**Example.** Prove for $f(x) = 5-3x$ that $\displaystyle\lim_{x\to 4} (5-3x) =-7$.

Before we prove it, we do similar scrap work. First, our proof would look something like this:

For $\epsilon > 0$, take $\delta =$ ........,
such that when $0 < |x-4| < \delta$,
....
....
....
we get $|(5-3x) - (-7)| < \epsilon$.

Now, imagine we have some $\epsilon$-tube drawn around $-7$. What $\delta$-window should we pick around $x=4$ so that our function will live in the $\epsilon$-tube?
A diagram helps here:

![[smc-fall-2023-math-7/week-5/---files/formal-definition-of-limits 2023-09-30 11.57.22.excalidraw.svg]]

If we solve for the left edge of the function inside the $\epsilon$-tube, we need to solve for $x$ in $-7+\epsilon=5-3x$, which gives $\displaystyle x = 4 - \frac{\epsilon}{3}$. Hence we see that a choice of $\displaystyle\delta = \frac{\epsilon}{3}$ should do the job. Now we write our proof.

$\blacktriangleright$ Proof. For $\epsilon > 0$, take $\displaystyle \delta = \frac{\epsilon}{3}$,
such that when $\displaystyle 0 < |x - 4| < \delta$,
we have $\displaystyle |x-4| < \frac{\epsilon}{3}$,
which implies $|3x-12| < \epsilon$,
which implies $|12-3x| < \epsilon$,
which implies $|(5-3x)+7| < \epsilon$,
and we get $|(5-3x) - (-7)| <\epsilon$.
Hence $\displaystyle\lim_{x\to 4}(5-3x)=-7$ as claimed. $\blacksquare$

### Some specific cases and examples.

Now, proving the limit of something amounts to finding an appropriate $\delta$ for a given $\epsilon$, and the graphical approach works provided that we can solve for the endpoints of the graph of $f(x)$ inside the specified $\epsilon$-tube. But what if the situation is more complicated than a linear function? Indeed, in general this task is not easy, but for basic elementary functions such as polynomials, rational functions, and roots, we can manage by using algebra and by controlling the size of "junk terms", as we shall see.

Nevertheless, we do have a nice observation about linear functions, as pointed out in class.

**Linear case.**

> If $f(x) = Ax + B$, where $A \neq 0$, then in proving $\displaystyle\lim_{x\to a}f(x)=L$, for $\epsilon > 0$ we can take $\displaystyle\delta = \frac{\epsilon}{|A|}$ and it will do the job.
> And if $A = 0$, so that $f(x) = B$ is a constant function, then any positive $\delta > 0$ will do. Say $\delta = 1$.

![[smc-fall-2023-math-7/week-5/---files/formal-definition-of-limits 2023-09-30 12.52.53.excalidraw.svg]]

Indeed this is the case when $A\neq 0$: When $x=a$, the proposed limit is $L=Aa+B$. Solving for the edges of the graph (say with $A > 0$), on one hand we have $L-\epsilon = (Aa+B)-\epsilon =Ax+B$, so $\displaystyle x = a - \frac{\epsilon}{A}$, and on the other hand we have $L+\epsilon = (Aa+B)+\epsilon = Ax+B$, so $\displaystyle x = a+ \frac{\epsilon}{A}$. Whence we see that taking $\displaystyle\delta = \frac{\epsilon}{|A|}$ makes the proof work (the absolute value is there so that $\delta$ stays positive even when $A < 0$, as in the last example, where $A=-3$ and $\delta = \epsilon/3$).

And when $A=0$, the function is a **constant function**, which is just a horizontal line. So in this case any positive $\delta$-window gives a graph that fits in any positive $\epsilon$-tube, and we can take $\delta$ to be any positive number we like, say $\delta = 1$.

Now if the function is not linear, graphical methods may or may not be as simple to deal with. We will discuss three situations where we can get around this: polynomials, rational functions, and roots.
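As a quick aside, if you want to see the linear-case rule in action numerically, here is a minimal sketch in Python (an illustration only, not part of any proof; the helper name `check_delta` is made up for this note). It samples points in the $\delta$-window around $x=4$ for the example $f(x)=5-3x$ above and confirms that $|f(x)-(-7)| < \epsilon$ when we take $\delta = \epsilon/|A| = \epsilon/3$.

```python
# Numerical spot check (not a proof!) of the linear-case rule delta = epsilon / |A|,
# using f(x) = 5 - 3x with a = 4 and L = -7 from the example above.

def check_delta(f, a, L, epsilon, delta, samples=10_000):
    """Sample points with 0 < |x - a| < delta and check |f(x) - L| < epsilon."""
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)   # offsets strictly between 0 and delta
        for x in (a - offset, a + offset):
            if abs(f(x) - L) >= epsilon:
                return False
    return True

f = lambda x: 5 - 3 * x
a, L, A = 4, -7, -3

for epsilon in (1, 0.5, 0.01, 1e-6):
    delta = epsilon / abs(A)
    print(f"epsilon = {epsilon}: delta = {delta} works on samples? {check_delta(f, a, L, epsilon, delta)}")
```

Of course the code only checks finitely many sample points; the $\epsilon$-$\delta$ proof is what certifies every $x$ in the window.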
**Polynomial case.**

Here is a general strategy: In dealing with non-linear cases to prove $\displaystyle\lim_{x\to a}f(x)=L$, recall that we need to choose a $\delta$ such that whenever $0 < |x-a| < \delta$ we will have $|f(x)-L| < \epsilon$. So what we can do is **reverse engineer** from $|f(x) -L | <\epsilon$ to see what we can say about $|x-a|$, and infer the relation of $\delta$ to $\epsilon$. In the algebraically "nice" cases (polynomials, rational functions, and roots), we should be able to extract the term $|x-a|$ from $|f(x)-L| < \epsilon$ by using enough algebraic methods (factoring, long division, conjugates). However, in doing so we may pick up some "additional junk terms" that contain the independent variable $x$, which is not desirable for $\delta$. To deal with this, we will impose an **additional condition** on $|x-a|$ so we can **control the size of this junk term**. And then we simply take the minimum with this imposed condition to get $\delta$. Let us illustrate with an example.

**Example.** For $f(x) = x^{3}-2x+1$, prove $\displaystyle\lim_{x\to3} (x^{3}-2x+1)=22$.

The structure of the proof would look like this:

$\blacktriangleright$ For $\epsilon > 0$, take $\delta =$ .......,
such that when $0 < |x-3| < \delta$,
....
....
....
we get $|(x^{3}-2x+1)-22| < \epsilon$. $\blacksquare$

First, some scrap work. For a given $\epsilon > 0$, we want to eventually be able to deduce that $|(x^{3}-2x+1) -22| < \epsilon$. Let us see if we can get $|x-3|$ to show up somewhere in that inequality. Notice $(x^{3}-2x+1)-22=x^{3}-2x-21$, and by long division with $x-3$ we get

$$
\begin{array}{rrrrrr}
 & & & x^{2} & +3x & +7 \\ \hline
x-3 & | & x^{3} & & -2x & -21 \\
 & -) & x^{3} & -3x^{2} & & \\ \hline
 & & & 3x^{2} & -2x & -21 \\
 & & -) & 3x^{2} & -9x & \\ \hline
 & & & & 7x & -21 \\
 & & & -) & 7x & -21 \\ \hline
 & & & & & 0
\end{array}
$$

so to want $|(x^{3}-2x+1) -22| < \epsilon$ is to want $|x-3| \cdot |x^{2} + 3x + 7| < \epsilon$, namely we desire $\displaystyle |x-3| < \frac{\epsilon}{|x^{2}+3x+7|}$.

Now typically if we get an expression such as $|x-3| < C \epsilon$, where $C$ is a constant number, then we can just take $\delta = C\epsilon$. But here we instead have some **junk term** $\displaystyle\frac{1}{|x^{2}+3x+7|}$ in place of a constant number $C$. So how do we deal with this?

The trick is to **impose an additional condition** on $|x-3|$, so we can **control the size** of this junk term $\displaystyle\frac{1}{|x^{2}+3x+7|}$. Here is one way to do it. Suppose further that $|x-3| < 1$ (here the choice of $1$ is any "sufficiently small number" that will make the following argument work); then notice that

$$
\begin{align*}
& |x-3| < 1 \\
\implies{} & -1 < x-3 < 1 \\
\implies{} & 2 < x < 4 \\
\implies{} & 17 < x^{2}+3x+7 < 35 \\
\implies{} & \frac{1}{35} < \frac{1}{|x^{2}+3x+7|} < \frac{1}{17} \\
\implies{} & \frac{\epsilon}{35} < \frac{\epsilon}{|x^{2}+3x+7|} < \frac{\epsilon}{17}
\end{align*}
$$

So if $|x-3| < 1$ and we take $\displaystyle |x-3| < \frac{\epsilon}{35}$, then we would get $\displaystyle |x-3| < \frac{\epsilon}{|x^{2}+3x+7|}$, which leads back to $|(x^{3}-2x+1) -22| < \epsilon$!

So what $\delta$ should we pick? We want it to be no greater than $1$ nor $\displaystyle\frac{\epsilon}{35}$ for this to happen. Hence taking $\displaystyle \delta = \min\left(1, \frac{\epsilon}{35}\right)$ would do the job.
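(Optional aside: here is a small numerical spot check of this scrap work, a sketch in plain Python and not part of the proof. It samples points with $0 < |x-3| < \min(1, \epsilon/35)$ and verifies $|f(x)-22| < \epsilon$ at those samples.)

```python
# Optional numerical spot check (not a proof) for f(x) = x^3 - 2x + 1 near x = 3,
# using the choice delta = min(1, epsilon / 35) from the scrap work above.

f = lambda x: x**3 - 2 * x + 1
a, L = 3, 22

for epsilon in (10, 1, 0.05, 1e-4):
    delta = min(1, epsilon / 35)
    # sample x strictly inside the delta-window, on both sides of a
    xs = [a + s * delta * i / 1001 for i in range(1, 1001) for s in (-1, 1)]
    assert all(abs(f(x) - L) < epsilon for x in xs), f"failed at epsilon={epsilon}"
    print(f"epsilon = {epsilon}: delta = {delta:.6g} keeps |f(x) - 22| < epsilon at all samples")
```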
Now that we are done with the scrap work, we write down the proof.

$\blacktriangleright$ Proof of $\displaystyle\lim_{x\to3} (x^{3}-2x+1)=22$.
For $\epsilon > 0$, take $\displaystyle \delta = \min\left(1, \frac{\epsilon}{35}\right)$,
such that when $0 < |x-3| < \delta$,
we have $|x-3| < 1$ and $\displaystyle |x-3| < \frac{\epsilon}{35}$,
and since $|x-3| < 1$ implies $\displaystyle\frac{\epsilon}{35} < \frac{\epsilon}{|x^{2}+3x+7|}$ (from scrap work),
we then have $\displaystyle |x-3| < \frac{\epsilon}{|x^{2}+3x+7|}$,
which implies $|x-3| \cdot |x^{2}+3x+7| < \epsilon$,
which implies $|x^{3}-2x-21| < \epsilon$,
and we get $|(x^{3}-2x+1) -22| < \epsilon$. $\blacksquare$

**Remark.** Again, the additional condition $|x-3| < 1$ can be anything small enough to make the algebra work out. If instead of $1$ we take something too large, say $|x-3| < 100$, then we have $-97 < x < 103$. The opposing signs here make bounding $x^{2} +3x+7$ unwieldy, which is an indication that our choice of $100$ is too big. Generally, impose something small, like $1$, or $0.1$, etc.

**Rational function case.**

We shall follow the same strategy to pick $\delta$:

> (1) Start from $|f(x) - L | <\epsilon$, and see if we can extract the term $|x-a|$ by algebra.
> (2) Once we have $|x-a| < (\text{junk term}) \cdot \epsilon$, then we are in business.
> (3) Impose an additional condition $|x-a| < D$, where $D$ is an appropriately small number so we can bound the $(\text{junk term})$ from below, say $C < (\text{junk term})$. With this additional condition, we would then have $C\epsilon < (\text{junk term})\cdot \epsilon$.
> (4) Finally, with these two bounds, take $\delta =\min(D,C \epsilon)$.

**Example.** For $\displaystyle f(x) = \frac{x^{3}+x-2}{x+3}$, prove $\displaystyle\lim_{x\to 2} \frac{x^{3}+x-2}{x+3} = \frac{8}{5}$.

Scrap work. We reverse engineer from $\displaystyle \left| \frac{x^{3}+x-2}{x+3} - \frac{8}{5}\right| < \epsilon$ and try to extract the term $|x-2|$ by algebra. Note

$$
\begin{align*}
& \left| \frac{x^{3}+x-2}{x+3} - \frac{8}{5}\right| < \epsilon \\
\iff{} & \left| \frac{5(x^{3}+x-2) - 8(x+3)}{5(x+3)}\right| < \epsilon \\
\iff{} & \left| \frac{5x^{3} - 3x-34}{5(x+3)}\right| < \epsilon \\
\stackrel{!!}{\iff} & \frac{|x-2|\cdot|5 x^2 + 10 x + 17|}{|5(x+3)|} < \epsilon \\
\iff{} & |x-2| < \left| \frac{5(x+3)}{5x^{2}+10x+17}\right| \epsilon
\end{align*}
$$

where the $!!$ step is done by long division / factoring. Here we get the junk term $\displaystyle \left| \frac{5(x+3)}{5x^{2}+10x+17}\right|$, which we will try to bound from below by imposing an additional condition on $|x-2|$.

Suppose $|x-2| < 1$; then $1 < x < 3$. In this case $20 < |5(x+3)| < 30$ and $32 < |5x^{2}+10x+17| < 92$. So when $|x-2| < 1$, we have the lower bound

$$
\frac{20}{92} < \left| \frac{5(x+3)}{5x^{2}+10x + 17} \right|
$$

(think about why!!), and so $\displaystyle \frac{20}{92}\epsilon < \left| \frac{5(x+3)}{5x^{2}+10x + 17} \right| \epsilon$. Hence we take $\displaystyle \delta = \min\left(1, \frac{20}{92}\epsilon\right)$.
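(Optional aside: the $!!$ factoring step can be double-checked with a computer algebra system. Here is a short sketch using the third-party Python library `sympy`, assuming it is installed; it is just a convenience check, not part of the argument.)

```python
# Verify the "!!" step: 5(x^3 + x - 2) - 8(x + 3) = 5x^3 - 3x - 34 = (x - 2)(5x^2 + 10x + 17).
import sympy as sp

x = sp.symbols('x')
numerator = 5 * (x**3 + x - 2) - 8 * (x + 3)

print(sp.expand(numerator))       # 5*x**3 - 3*x - 34
print(sp.factor(numerator))       # (x - 2)*(5*x**2 + 10*x + 17)
print(sp.div(numerator, x - 2))   # (quotient 5*x**2 + 10*x + 17, remainder 0)
```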
Now we write the proof:

$\blacktriangleright$ Proof. For $\epsilon > 0$, take $\displaystyle \delta = \min\left(1, \frac{20}{92}\epsilon\right)$,
such that when $0 < |x-2| < \delta$, we have $|x-2| < 1$ and $\displaystyle |x-2| < \frac{20}{92}\epsilon$.
And when $|x-2| < 1$, we have $\displaystyle \frac{20}{92}\epsilon < \left| \frac{5(x+3)}{5x^{2}+10x + 17} \right| \epsilon$ (from scrap work),
so we get $\displaystyle |x-2| < \left| \frac{5(x+3)}{5x^{2}+10x + 17} \right| \epsilon$,
which gives $\displaystyle \frac{|x-2|\cdot|5 x^2 + 10 x + 17|}{|5(x+3)|} < \epsilon$,
which gives $\displaystyle \left| \frac{5x^{3} - 3x-34}{5(x+3)}\right| < \epsilon$,
which gives $\displaystyle \left| \frac{5(x^{3}+x-2) - 8(x+3)}{5(x+3)}\right| < \epsilon$,
which gives $\displaystyle\left| \frac{x^{3}+x-2}{x+3} - \frac{8}{5}\right| < \epsilon$, as desired. $\blacksquare$

**Remark.** Most of the algebraic work in the proof is just re-writing our scrap work, but backwards! Here I picked $1$ for the additional condition and it didn't cause any trouble. If it doesn't work, use a smaller choice such as $0.1$, or smaller.

**Root case.**

If you have followed along up to now, the strategy remains the same, but we need to use conjugates to help extract $|x-a|$ from $|f(x)-L| < \epsilon$.

**Example.** For $f(x) = \sqrt{3x+2}$, prove $\displaystyle\lim_{x\to 4} \sqrt{3x+2} = \sqrt{14}$.

Some scrap work. We want to extract the term $|x-4|$ from $|\sqrt{3x+2}-\sqrt{14}| < \epsilon$. Multiplying both sides by the conjugate $\sqrt{3x+2}+ \sqrt{14}$, we get

$$
\begin{align*}
& \left|\sqrt{3x+2}-\sqrt{14}\right|\cdot \left|\sqrt{3x+2}+\sqrt{14}\right| < \left|\sqrt{3x+2}+\sqrt{14}\right| \cdot\epsilon \\
\iff{} & \left|3x+2-14\right| < \left|\sqrt{3x+2} + \sqrt{14}\right|\cdot \epsilon \\
\iff{} & |3x-12| < \left|\sqrt{3x+2} + \sqrt{14}\right|\cdot \epsilon \\
\iff{} & |x-4| < \frac{1}{3} \left|\sqrt{3x+2} + \sqrt{14}\right|\cdot \epsilon
\end{align*}
$$

Now we bound the junk term from below by imposing an additional condition on $|x-4|$, say $|x-4| < 1$. In this case $3 < x < 5$, so

$$
\begin{align*}
& \sqrt{11} + \sqrt{14} < \left|\sqrt{3x+2} + \sqrt{14}\right| \\
\implies{} & \frac{\sqrt{11} + \sqrt{14}}{3} \epsilon < \frac{1}{3}\left|\sqrt{3x+2} + \sqrt{14}\right|\epsilon
\end{align*}
$$

This suggests we take $\displaystyle\delta = \min\left(1,\frac{\sqrt{11} + \sqrt{14}}{3} \epsilon\right)$. Now we write the proof.

$\blacktriangleright$ Proof of $\displaystyle\lim_{x\to 4} \sqrt{3x+2} = \sqrt{14}$.
Let $\epsilon > 0$, take $\displaystyle\delta = \min\left(1,\frac{\sqrt{11} + \sqrt{14}}{3} \epsilon\right)$,
such that when $0 < |x-4| < \delta$, we have $|x-4| < 1$ and $\displaystyle|x-4| < \frac{\sqrt{11} + \sqrt{14}}{3} \epsilon$.
And when $|x-4| < 1$, we have $\displaystyle\frac{\sqrt{11} + \sqrt{14}}{3} \epsilon < \frac{1}{3}\left|\sqrt{3x+2} + \sqrt{14}\right|\epsilon$ (from scrap work),
so $\displaystyle|x-4| < \frac{1}{3}\left|\sqrt{3x+2} + \sqrt{14}\right|\epsilon$,
which implies $|3x-12| < \left|\sqrt{3x+2} + \sqrt{14}\right|\cdot \epsilon$,
which implies $\left|3x+2-14\right| < \left|\sqrt{3x+2} + \sqrt{14}\right|\cdot \epsilon$,
which implies $\left|\sqrt{3x+2}-\sqrt{14}\right|\cdot \left|\sqrt{3x+2}+\sqrt{14}\right| < \left|\sqrt{3x+2}+\sqrt{14}\right| \cdot\epsilon$,
which implies $|\sqrt{3x+2}-\sqrt{14}| < \epsilon$, as desired. $\blacksquare$

### Final remarks.

Hopefully you see how this works for our simple cases of functions. A proof of $\displaystyle\lim_{x\to a}f(x)=L$ should read something like this:

> For $\epsilon > 0$, take $\delta =$ ........... ,
> such that when $0 < |x-a|<\delta$,
> which implies ....
> ....
> ....
> ....
> ....
> we get $|f(x)-L| < \epsilon$. $\blacksquare$

And here is a strategy to pick $\delta$ when we have a non-linear case (a small numerical sketch of this recipe is included at the end of these notes):

> (1) Start from $|f(x) - L | <\epsilon$, and see if we can extract the term $|x-a|$ by algebra.
> (2) Once we have $|x-a| < (\text{junk term}) \cdot \epsilon$, then we are in business.
> (3) Impose an additional condition $|x-a| < D$, where $D$ is an appropriately small number so we can bound the $(\text{junk term})$ from below, say $C < (\text{junk term})$. With this additional condition, we would then have $C\epsilon < (\text{junk term})\cdot \epsilon$.
> (4) Finally, with these two bounds, take $\delta =\min(D,C \epsilon)$.

Now, the reason this "strategy" works well is that these are nicely behaved functions whose behavior we can control quite well. In a monstrous situation, we may not have such luck, but this is a start, and we won't engage too much with monstrous situations in our class. You will see them in a future math class called **analysis**, if you so choose to pursue this path...
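To close, here is the small numerical sketch of the recipe promised above (plain Python, nothing beyond the standard library; the helper name `spot_check` is made up for this note). It does not replace a proof: steps (1)–(3) are still done by hand, and the code only samples finitely many points to see that $\delta = \min(D, C\epsilon)$ behaves as expected, illustrated with the root-case example.

```python
# A minimal numerical sketch (not a proof!) of the delta-picking recipe above.
# Steps (1)-(3) are done by hand; the code only spot-checks the resulting
# delta = min(D, C * epsilon) at finitely many sample points near x = a.

import math

def spot_check(f, a, L, D, C, epsilon, samples=5000):
    """Check |f(x) - L| < epsilon for sample points with 0 < |x - a| < delta."""
    delta = min(D, C * epsilon)
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)   # strictly between 0 and delta
        for x in (a - offset, a + offset):
            if abs(f(x) - L) >= epsilon:
                return False
    return True

# Root-case example: f(x) = sqrt(3x + 2), a = 4, L = sqrt(14),
# with D = 1 and C = (sqrt(11) + sqrt(14)) / 3 from the scrap work.
f = lambda x: math.sqrt(3 * x + 2)
a, L = 4, math.sqrt(14)
D, C = 1, (math.sqrt(11) + math.sqrt(14)) / 3

for epsilon in (1, 0.1, 1e-3, 1e-6):
    print(epsilon, spot_check(f, a, L, D, C, epsilon))
```

Seeing `True` printed for each $\epsilon$ is of course only evidence, not a proof; the $\epsilon$-$\delta$ argument is what covers every $x$ in the window.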